ELEN6887 Lecture 12: Maximum Likelihood Estimation

Author

  • R. Castro
Abstract

We immediately notice the similarity between the empirical risk we had seen before and the negative log-likelihood. We will see that maximum likelihood estimation can be regarded as our familiar empirical risk minimization when the loss function is chosen appropriately. In the meantime, note that minimizing (1) yields our familiar squared-error loss if the Wi's are Gaussian. If the Wi's are Laplacian (pW (w) ∝ e−c|w|), we get the sum of absolute errors. We can also consider non-additive models, such as the Poisson model (often used in medical imaging applications like PET imaging).
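The correspondence between the noise model and the loss function can be checked numerically. A minimal sketch, assuming the simplest additive model Yi = θ∗ + Wi with a constant parameter (all variable names here are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_star = 2.0
y = theta_star + rng.normal(size=1000)          # observations Y_i = theta* + W_i

thetas = np.linspace(0.0, 4.0, 401)             # candidate parameter grid

# Gaussian W_i: -log p(Y_i; theta) is (up to constants) (Y_i - theta)^2,
# so the negative log-likelihood is the squared-error empirical risk.
gauss_nll = np.array([np.sum((y - t) ** 2) for t in thetas])

# Laplacian W_i (p_W(w) ∝ exp(-c|w|)): -log p(Y_i; theta) is c|Y_i - theta|,
# so the negative log-likelihood is the sum of absolute errors.
laplace_nll = np.array([np.sum(np.abs(y - t)) for t in thetas])

# The two risks are minimized by the sample mean and sample median, respectively.
print(thetas[np.argmin(gauss_nll)], y.mean())
print(thetas[np.argmin(laplace_nll)], np.median(y))
```

The grid minimizers match the closed-form minimizers (mean for squared error, median for absolute error) up to the grid spacing, which is exactly the ERM reading of MLE described above.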


Related articles

ELEN6887 Lecture 14: Maximum Likelihood Estimation and Complexity Regularization

Yi i.i.d. ∼ pθ∗ , i ∈ {1, . . . , n}, where θ∗ ∈ Θ. We can view pθ∗ as a member of a parametric class of distributions, P = {pθ}θ∈Θ. Our goal is to use the observations {Yi} to select an appropriate distribution (i.e., model) from P. We would like the selected distribution to be close to pθ∗ in some sense. We use the negative log-likelihood loss function, defined as l(θ, Yi) = − log pθ(Yi). The e...

Full text

ELEN6887 Lecture 13: Maximum Likelihood Estimation and Complexity Regularization

Yi i.i.d. ∼ pθ∗ , i ∈ {1, . . . , n}, where θ∗ ∈ Θ. We can view pθ∗ as a member of a parametric class of distributions, P = {pθ}θ∈Θ. Our goal is to use the observations {Yi} to select an appropriate distribution (i.e., model) from P. We would like the selected distribution to be close to pθ∗ in some sense. We use the negative log-likelihood loss function, defined as l(θ, Yi) = − log pθ(Yi). The e...

Full text

ELEN6887 Lecture 15: Denoising Smooth Functions with Unknown Smoothness

Lipschitz functions are interesting, but can be very rough (they can have many kinks). In many situations the functions are much smoother; this is how you would model the temperature inside a museum room, for example. Often we don't know how smooth the function might be, so an interesting question is whether we can adapt to the unknown smoothness. In this lecture we will use the Maximum Complexit...

Full text

ELEN6887 Lecture 14: Denoising Smooth Functions with Unknown Smoothness

Lipschitz functions are interesting, but can be very rough (they can have many kinks). In many situations the functions are much smoother; this is how you would model the temperature inside a museum room, for example. Often we don't know how smooth the function might be, so an interesting question is whether we can adapt to the unknown smoothness. In this lecture we will use the Maximum Complexit...

Full text

Lecture 8 : Properties of Maximum Likelihood Estimation ( MLE ) ( LaTeX prepared by Haiguang Wen )

Maximum Likelihood Estimation (MLE) is a widely used statistical estimation method. In this lecture, we will study its properties: efficiency, consistency, and asymptotic normality. MLE is a method for estimating the parameters of a statistical model. Given the distribution of a statistical model f(y; θ) with unknown deterministic parameter θ, MLE estimates the parameter θ by maximizing the pro...
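Consistency, one of the properties this abstract lists, can be illustrated with a short numerical sketch. Assuming a Poisson model f(y; θ) whose MLE has the closed form θ̂ = sample mean (the setup and variable names are illustrative, not taken from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 3.0                      # unknown deterministic parameter theta

errors = []
for n in (10, 100, 10_000):
    y = rng.poisson(theta_true, size=n)
    theta_hat = y.mean()              # maximizer of sum_i log f(Y_i; theta) for Poisson
    errors.append(abs(theta_hat - theta_true))

# Consistency: the estimation error shrinks as the sample size n grows.
print(errors)
```

Asymptotic normality could be checked the same way by repeating the experiment many times at a fixed n and inspecting the histogram of √n(θ̂ − θ).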

Full text



Publication year: 2010